Add Uno solver (unopy) to NLP solver CI workflow #165
Transurgeon wants to merge 152 commits into master from
Conversation
initial attempts at adding a smooth canon for maximum
Ipopt interface prototype
adding more smooth canonicalizers
* adds oracles and bounds class to ipopt interface
* adds some settings and solver lists changes for IPOPT
* adds nlp solver option and can call ipopt
* adds more experiments for integrating ipopt as a solver interface
* passing the problem through the inversion
* add some more extra changes
* adding nlmatrixstuffing
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many tests, new smoothcanon for min, and improvements to ipopt_nlpif
* fixing last two tests
* add another example, qcp
* adding example for acopf
* add control of a car example done
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* update solution statuses thanks to odow
* removes unused solver information
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* getting rocket landing example to work
* add changes to the jacobian computation
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many more examples of non-convex functions
* making lots of progress on understanding good canonicalizations
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
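The "smooth canon" commits above replace nonsmooth atoms with differentiable surrogates so gradient-based NLP solvers can handle them. As a hedged illustration only: log-sum-exp is one standard smoothing of the maximum, though it is not necessarily the canonicalization this PR implements.

```python
import numpy as np

def smooth_max(x, t=100.0):
    # Log-sum-exp surrogate for max(x); approaches the true max as t grows.
    # Hypothetical sketch, not necessarily the PR's actual canonicalizer.
    x = np.asarray(x, dtype=float)
    m = x.max()  # shift by the max for numerical stability
    return m + np.log(np.sum(np.exp(t * (x - m)))) / t
```

The surrogate is smooth everywhere and upper-bounds the true maximum, which is the property a smooth canonicalization of a convex `max` needs to preserve.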
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
# Conflicts:
# README.md
# cvxpy/atoms/__init__.py
# cvxpy/problems/problem.py
# cvxpy/reductions/solvers/defines.py
# cvxpy/reductions/solvers/solving_chain.py
# cvxpy/settings.py
* clarified 0 iteration termination
* add tests
* removed print statements
* trigger CI
---------
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
…ing cvxpy#3146)
# Conflicts:
# cvxpy/reductions/cvx_attr2constr.py
* Rename ESR/HSR to linearizable_convex/linearizable_concave
  Spell out opaque acronyms for clarity per PR review feedback: is_atom_esr → is_atom_linearizable_convex, is_atom_hsr → is_atom_linearizable_concave, is_esr → is_linearizable_convex, is_hsr → is_linearizable_concave, is_smooth → is_linearizable. Docstrings clarify that "linearizable convex" means the expression is convex after linearizing all smooth subexpressions.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Adopt three-way atom classification: smooth, nonsmooth-convex, nonsmooth-concave
  Replace the two-axis is_atom_linearizable_convex/concave overrides across all atoms with a single method from the paper's three categories: is_atom_smooth, is_atom_nonsmooth_convex, or is_atom_nonsmooth_concave. The base class derives the old linearizable methods for backward compatibility.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Inline atom-level linearizable checks into expression-level composition rules
  Remove the intermediate is_atom_linearizable_convex/concave methods and the _has_dnlp_classification helper, which were only used in the two expression-level composition rules in atom.py. The three-way classification (smooth, nonsmooth-convex, nonsmooth-concave) is now used directly.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Simplify is_atom_smooth docstring
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Propagate initial values to reduced variables in CvxAttr2Constr
  When CvxAttr2Constr creates reduced variables for dim-reducing attributes (e.g., diag), the original variable's initial value was not being lowered and assigned to the reduced variable. This caused NLP solvers to fail with "Variable has no value" during initial point construction.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Handle sparse values when propagating diag variable initials
  When a diag variable has its value set as a sparse matrix, extract the diagonal directly via scipy rather than passing it through np.diag, which does not support sparse inputs.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Revert "Handle sparse values when propagating diag variable initials"
  This reverts commit e422565.
* Revert "Propagate initial values to reduced variables in CvxAttr2Constr"
  This reverts commit d02a758.
* fix pnorm nonsmooth convex
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
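The three-way classification described in the commits above can be sketched as a small class hierarchy. This is a hypothetical miniature for illustration: the real cvxpy methods carry the same names but also apply per-argument composition rules omitted here.

```python
class Atom:
    # Hypothetical miniature of the three-way atom classification
    # (smooth / nonsmooth-convex / nonsmooth-concave); the real cvxpy
    # composition rules over arguments are richer than this sketch.
    def is_atom_smooth(self):
        return False

    def is_atom_nonsmooth_convex(self):
        return False

    def is_atom_nonsmooth_concave(self):
        return False

    def is_linearizable_convex(self):
        # "Linearizable convex": convex once smooth subexpressions
        # are linearized, so smooth and nonsmooth-convex both qualify.
        return self.is_atom_smooth() or self.is_atom_nonsmooth_convex()

    def is_linearizable_concave(self):
        return self.is_atom_smooth() or self.is_atom_nonsmooth_concave()

class ExpAtom(Atom):
    # exp is smooth, hence linearizable in both directions.
    def is_atom_smooth(self):
        return True

class AbsAtom(Atom):
    # abs is nonsmooth but convex.
    def is_atom_nonsmooth_convex(self):
        return True
```

The base class deriving the `is_linearizable_*` predicates from the three atom-level flags mirrors the backward-compatibility layer the commit message mentions.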
When CvxAttr2Constr creates reduced variables for dimension-reducing attributes (e.g. diag=True), the initial value was not propagated to the reduced variable, causing NLP solvers to fail. This mirrors the existing value propagation already done for parameters. Also handle sparse diagonal matrices in lower_value() by using .diagonal() instead of np.diag() which doesn't accept sparse input. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
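The sparse-diagonal handling described above can be sketched as follows. The helper name `lower_diag_value` is hypothetical (the PR talks about `lower_value()` in CvxAttr2Constr); the point is the dispatch between `np.diag`, which rejects sparse input, and scipy's `.diagonal()`.

```python
import numpy as np
import scipy.sparse as sp

def lower_diag_value(value):
    # Hypothetical sketch of the fix: np.diag() does not accept sparse
    # matrices, so extract the diagonal via scipy's .diagonal() instead.
    if sp.issparse(value):
        return value.diagonal()
    return np.diag(value)
```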
#153)
* Make is_linearizable_convex/is_linearizable_concave abstract on Expression
  Enforce implementation in all Expression subclasses by uncommenting @abc.abstractmethod decorators. Add missing implementations to indicator (convex, not concave) and PartialProblem (delegates to is_convex/is_concave).
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* remove is_smooth from max
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Separates test-only utility from production code by moving it to cvxpy/tests/nlp_tests/derivative_checker.py and updating all 19 test file imports. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
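A derivative checker of the kind moved into `cvxpy/tests/nlp_tests/derivative_checker.py` typically compares analytic gradients against central finite differences. The sketch below is a generic version under that assumption, not the file's actual contents.

```python
import numpy as np

def check_gradient(f, grad, x, eps=1e-6, tol=1e-4):
    # Compare an analytic gradient against central finite differences.
    # Generic sketch; the PR's derivative_checker.py may differ.
    g = np.asarray(grad(x), dtype=float)
    fd = np.zeros_like(np.asarray(x, dtype=float))
    for i in range(x.size):
        e = np.zeros_like(fd)
        e[i] = eps
        fd[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return float(np.max(np.abs(fd - g))) < tol
```

Keeping a utility like this under `tests/` rather than the package proper matches the separation of test-only code from production code described in the commit.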
* Address review comments: remove redundant nonsmooth methods, add types, docs
  - Remove is_atom_nonsmooth_convex/is_atom_nonsmooth_concave from base Atom class and 10 atom subclasses; use existing is_atom_convex/is_atom_concave in DNLP composition rules instead
  - Add type annotations and docstrings to NLPsolver, Bounds, and Oracles in nlp_solver.py
  - Document Variable.sample_bounds with class-level type annotation and docstring
  - Revert GENERAL_PROJECTION_TOL back to 1e-10 (was loosened for removed IPOPT derivative checker)
  - Update CLAUDE.md atom classification docs to reflect simplified API
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Move sample_bounds docs from #: comments into Variable class docstring
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…156)
* Extract NLP solving logic from problem.py into nlp_solving_chain.py
  Move the ~85-line NLP block from Problem._solve() and the two initial point methods into a dedicated module. This addresses PR review feedback:
  - NLP chain building, initial point logic, and solve orchestration now live in cvxpy/reductions/solvers/nlp_solving_chain.py
  - Use var.get_bounds() instead of var.bounds so sign attributes (nonneg, nonpos) are incorporated into bounds automatically
  - Initial point helpers are now private module-level functions instead of public Problem methods
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update NLP initial point tests to use _set_nlp_initial_point
  Tests now import and call the module-level helper directly instead of the removed Problem.set_NLP_initial_point() method.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Remove debug print and expand comment in best_of loop
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Gate best_of print on verbose flag
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Address PR review comments on nlp_solving_chain extraction
  - Add circular import comment in problem.py explaining the deferred import
  - Move NLP_SOLVER_VARIANTS from nlp_solving_chain.py to defines.py
  - Set BOUNDED_VARIABLES = True on NLPsolver base class and use it in _build_nlp_chain (matching the conic solver pattern in solving_chain.py)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* adds changes to test_problem with parametrize
* Revert verbose unpack comment to original single-line version
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
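The `var.get_bounds()` behavior the commit above relies on, folding sign attributes into explicit bounds, can be sketched like this. The standalone function and its parameters are hypothetical; in cvxpy the logic lives on the Variable class.

```python
import numpy as np

def get_bounds(lower, upper, nonneg=False, nonpos=False, n=1):
    # Hypothetical sketch of merging explicit bounds with sign
    # attributes, as var.get_bounds() is described to do above.
    lo = np.full(n, -np.inf) if lower is None else np.asarray(lower, dtype=float)
    hi = np.full(n, np.inf) if upper is None else np.asarray(upper, dtype=float)
    if nonneg:
        lo = np.maximum(lo, 0.0)  # nonneg tightens the lower bound to 0
    if nonpos:
        hi = np.minimum(hi, 0.0)  # nonpos tightens the upper bound to 0
    return lo, hi
```

Merging the attributes here means an NLP chain can hand a single box-constraint description to the solver instead of handling sign attributes as a special case.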
* Remove conda from CI, use uv + system IPOPT
  Replace conda-based test_nlp_solvers workflow with uv, installing IPOPT via system packages (apt on Ubuntu, brew on macOS) instead of conda-forge. Uncomment IPOPT optional dependency in pyproject.toml so uv sync --extra IPOPT works. Update installation docs in CLAUDE.md and README.md.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Keep IPOPT extra commented out to avoid --all-extras breakage
  The test_optional_solvers workflow uses uv sync --all-extras, which would try to build cyipopt without system IPOPT installed. Instead, install cyipopt directly via uv pip in test_nlp_solvers where the system library is available.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Remove UV_SYSTEM_PYTHON to fix externally-managed env error
  uv pip install fails on Ubuntu when UV_SYSTEM_PYTHON=1 because the system Python is externally managed. Removing it lets uv use its own venv created by uv sync.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Install LAPACK and BLAS dev libraries for cyipopt build on Ubuntu
  cyipopt links against LAPACK and BLAS, which are not installed by default on Ubuntu runners.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Use uv venv + uv pip install instead of uv sync for NLP workflow
  uv sync manages a locked environment that doesn't play well with uv pip install for additional packages. Switch to uv venv + uv pip install to manage the environment directly.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Includes upstream fixes for quad_form canonicalization, gradient handling, LDL factorization optimization, CI updates, and documentation improvements. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Same pattern as test_qcp — Uno's default FilterSQP preset struggles with these problems, but the IPM preset converges fine. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The macOS wheel has an ABI mismatch (cvanaret/Uno#485) that causes a silent import failure. This step makes it visible in CI logs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Benchmarks that have stayed the same:

Hi @Transurgeon,
I tried locally and it still doesn't seem to work unfortunately.

@amontoison do you think it's trying to link some other HiGHS library that exists somewhere else?
With the latest release of

The AI is saying something totally wrong.
yes I believe so.. I think pip automatically tries to get precompiled artifacts.
It is installing the latest, which is 0.2.1
I totally trust you, but it would be nice if you could double check that. Maybe the HiGHS binaries have been built using

it seems like part of the string has a

@Transurgeon we did have an issue with the wheels workflow for macOS, it compiled with full GNU instead of Clang+gfortran.
it seems to work now, thanks a lot @cvanaret @amontoison !

@Transurgeon great :) I'm happy to address the issues with the ipopt preset (many iterations required) in a separate PR.

Thanks @cvanaret !
Description
Testing the CI with new unopy release.
See issue here: cvanaret/Uno#485.
Let's wait until they fix it, since it still doesn't seem to work.
Also I had to change the solver to use the IPM method for the geo mean tests, and Uno still takes over 1000 iterations :(.
Issue link (if applicable):
Type of change
Contribution checklist